A new method for solving the wave equation is presented, called the learned Born series (LBS), which is derived from the convergent Born series but whose components are found through training. The LBS is shown to be significantly more accurate than the convergent Born series for the same number of iterations in the presence of high-contrast scatterers, while maintaining a comparable computational complexity. The LBS generates a reasonable prediction of the global pressure field with a small number of iterations, and the errors decrease as the number of learned iterations increases.
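For context, the sketch below (plain NumPy, illustrative unit grid spacing) implements the classical convergent Born series iteration that the LBS generalizes; in the learned variant, the preconditioner, Green's-function and scattering-potential operators would be replaced by trained components, which is what allows accurate fields with far fewer iterations.

```python
import numpy as np

def convergent_born_series(k, k0, src, eps=None, n_iter=10):
    """Classical convergent Born series for the 2-D Helmholtz equation
    (nabla^2 + k^2(r)) u = -src, following Osnabrugge et al. (2016).
    The learned Born series replaces gamma, G and V with trained blocks."""
    # Scattering potential V = k^2 - k0^2 - i*eps, with eps chosen for convergence
    if eps is None:
        eps = 1.05 * np.max(np.abs(k**2 - k0**2))
    V = k**2 - k0**2 - 1j * eps

    # Homogeneous-background Green's function, applied in Fourier space
    ny, nx = k.shape
    py = 2 * np.pi * np.fft.fftfreq(ny)      # assumes unit grid spacing
    px = 2 * np.pi * np.fft.fftfreq(nx)
    p2 = px[None, :]**2 + py[:, None]**2
    G_hat = 1.0 / (p2 - k0**2 - 1j * eps)
    G = lambda f: np.fft.ifft2(G_hat * np.fft.fft2(f))

    gamma = 1j / eps * V                      # preconditioner
    u = np.zeros_like(V, dtype=complex)
    for _ in range(n_iter):                   # each pass ~ one (learned) iteration
        u = u + gamma * (G(V * u + src) - u)
    return u
```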
We present an open-source differentiable acoustic simulator, J-Wave, that can solve time-varying and time-harmonic acoustic problems. It supports automatic differentiation, a program-transformation technique with many applications, especially in machine learning and scientific computing. J-Wave is composed of modular components that can be easily customized and reused, and it is compatible with some of the most popular machine learning libraries, such as JAX and TensorFlow. The accuracy of the simulation results for known configurations is evaluated against the widely used K-Wave toolbox and a range of acoustic simulation software. J-Wave is available from https://github.com/ucl-bug/jwave.
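As a generic illustration of what a differentiable acoustic simulation enables (this is not the J-Wave API; the 1-D stepper, names, and parameters below are illustrative assumptions), the following JAX sketch differentiates a toy finite-difference wave simulation with respect to the sound-speed map:

```python
import jax
import jax.numpy as jnp

def simulate(c, p0, n_steps=200, dx=1e-3, dt=2e-7):
    """Toy 1-D second-order finite-difference wave stepper: p_tt = c^2 p_xx.
    Illustrative only; J-Wave provides full time-domain and time-harmonic
    solvers built from modular, reusable components."""
    def laplacian(p):
        return (jnp.roll(p, 1) - 2 * p + jnp.roll(p, -1)) / dx**2

    def step(carry, _):
        p, p_prev = carry
        p_next = 2 * p - p_prev + (c * dt)**2 * laplacian(p)
        return (p_next, p), None

    (p, _), _ = jax.lax.scan(step, (p0, p0), None, length=n_steps)
    return p

def loss(c, p0, target):
    """Mismatch between the simulated field and a target recording."""
    return jnp.mean((simulate(c, p0) - target) ** 2)

# Gradients with respect to the sound-speed map come for free from automatic
# differentiation, which is what enables gradient-based inverse problems and
# integration with machine learning pipelines.
grad_c = jax.grad(loss)

c = jnp.full((128,), 1500.0)                                # sound speed [m/s]
p0 = jnp.exp(-0.5 * ((jnp.arange(128) - 64) / 3.0) ** 2)    # initial pressure
target = jnp.zeros((128,))
g = grad_c(c, p0, target)                                   # d(loss)/d(sound speed)
```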
Many interventional surgical procedures rely on medical imaging to visualize and track instruments. Such imaging methods must not only offer real-time capability but also provide accurate and robust position information. In ultrasound applications, typically only two-dimensional data from a linear array are available, so precise position estimation in the out-of-plane third dimension is non-trivial. In this work, we first train a neural network on realistic synthetic training data to estimate the out-of-plane offset of an object from the associated axial aberration in reconstructed ultrasound images. The resulting estimates are then combined with a Kalman filtering approach that uses the localization estimates obtained in previous time frames to improve localization robustness and reduce the influence of measurement noise. The accuracy of the proposed method is evaluated using simulations, and its practical applicability is demonstrated on experimental data obtained with a novel optical ultrasound imaging setup. Accurate and robust position information is provided in real time: axial and lateral coordinates are estimated with a mean error of 0.1 mm for simulated data and 0.1 mm for experimental data. Three-dimensional localization is most accurate for out-of-plane distances above 1 mm, up to a maximum distance of 25 mm, for an aperture of 5 mm.
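A minimal sketch of the Kalman-filter fusion described above, assuming a constant-velocity model for the out-of-plane offset and treating each per-frame network prediction as a noisy scalar measurement (the noise levels and frame interval are illustrative assumptions):

```python
import numpy as np

def kalman_track(measurements, dt=0.02, q=1e-4, r=0.05**2):
    """Fuse per-frame out-of-plane offset estimates from the network with a
    constant-velocity Kalman filter to improve robustness to measurement noise.
    State x = [offset, velocity]; q = process noise, r = measurement variance."""
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
    H = np.array([[1.0, 0.0]])                # only the offset is measured
    Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
    R = np.array([[r]])

    x = np.array([[measurements[0]], [0.0]])  # initial state
    P = np.eye(2)                             # initial covariance
    filtered = []
    for z in measurements:
        # Predict from the previous time frame
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the network's estimate for the current frame
        y = np.array([[z]]) - H @ x           # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        filtered.append(float(x[0, 0]))
    return np.array(filtered)
```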
Differentiable simulators are an emerging concept with applications in several fields, from reinforcement learning to optimal control. Their distinguishing feature is the ability to compute analytical gradients with respect to their input parameters. Like neural networks, which are constructed by composing several building blocks called layers, a simulation often requires computing the output of an operator that can itself be decomposed into elementary units chained together. While each layer of a neural network represents a specific discrete operation, the same operator can have multiple representations, depending on the discretization employed and the research question to be addressed. Here, we propose a simple design pattern for constructing a library of differentiable operators and discretizations, by representing operators as mappings between families of continuous functions parameterized by finite vectors. We demonstrate the approach on an acoustic optimization problem, in which the Helmholtz equation is discretized using a Fourier spectral method and differentiability is demonstrated by using gradient descent to optimize the speed of sound of an acoustic lens. The proposed framework is open source and available at https://github.com/ucl-bug/jaxdf.
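A hedged sketch of the kind of optimization described above (not the jaxdf API): a Fourier-spectral Helmholtz operator acting on a grid-sampled field, with jax.grad driving gradient descent on the sound-speed map; the nondimensional units, target field, and loss are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

# Design-pattern sketch: each operator maps the finite parameter vector of one
# discretized function (here, grid samples with a Fourier-spectral basis) to
# that of another, so composed operators stay differentiable end to end.

def laplacian_fourier(u):
    """Fourier-spectral Laplacian of a 2-D field on a periodic, unit-spaced grid."""
    ky = 2 * jnp.pi * jnp.fft.fftfreq(u.shape[0])
    kx = 2 * jnp.pi * jnp.fft.fftfreq(u.shape[1])
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    return jnp.real(jnp.fft.ifft2(-k2 * jnp.fft.fft2(u)))

def helmholtz(u, c, omega=1.0):
    """Helmholtz operator (nabla^2 + (omega/c)^2) applied to u (nondimensional units)."""
    return laplacian_fourier(u) + (omega / c) ** 2 * u

def loss(c, u_desired):
    """Toy objective: adjust the sound-speed map so that a desired field
    approximately satisfies the homogeneous Helmholtz equation."""
    return jnp.mean(helmholtz(u_desired, c) ** 2)

n = 64
yy, xx = jnp.meshgrid(jnp.arange(n), jnp.arange(n), indexing="ij")
u_desired = jnp.exp(-((xx - n / 2) ** 2 + (yy - n / 2) ** 2) / 50.0)  # focal spot
c = jnp.ones((n, n))                       # initial (nondimensional) sound speed

# Gradient descent on the sound speed, mirroring the acoustic-lens example.
grad_fn = jax.jit(jax.grad(loss))
for _ in range(200):
    c = c - 0.1 * grad_fn(c, u_desired)
```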
Deep learning-based image reconstruction methods have shown impressive empirical performance in many imaging modalities. These methods typically require large amounts of high-quality paired training data, which are often unavailable in medical imaging. To address this issue, we propose a novel unsupervised knowledge-transfer paradigm for learned reconstruction within a Bayesian framework. The proposed approach learns the reconstruction network in two phases. The first phase trains a reconstruction network on a set of ordered pairs comprising ground-truth images of ellipses and the corresponding simulated measurement data. The second phase fine-tunes the pretrained network on more realistic measurement data without supervision. By construction, the framework is able to deliver predictive uncertainty information alongside the reconstructed images. We present extensive experimental results on low-dose and sparse-view computed tomography, showing that the approach is competitive with several state-of-the-art supervised and unsupervised reconstruction techniques. Moreover, for test data distinct from the training data, the proposed framework significantly improves reconstruction quality, both visually and in terms of PSNR and SSIM, compared with learned methods trained only on the synthetic dataset.
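A schematic of the two-phase training pattern on a toy linear inverse problem, assuming (purely for illustration, not the paper's Bayesian formulation) a supervised loss against synthetic phantoms in phase one and an unsupervised measurement-consistency loss in phase two; the forward operator, network, and data below are placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_pix, n_meas = 64, 32
A = torch.randn(n_meas, n_pix) / n_meas**0.5      # toy forward operator, y = A x

net = nn.Sequential(nn.Linear(n_meas, 128), nn.ReLU(), nn.Linear(128, n_pix))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def random_phantoms(batch):
    """Stand-in for synthetic ellipse phantoms (smooth random signals)."""
    x = torch.randn(batch, n_pix)
    kernel = torch.ones(1, 1, 9) / 9.0
    return torch.nn.functional.conv1d(x[:, None, :], kernel, padding=4)[:, 0, :]

# Phase 1: supervised pretraining on simulated (phantom, measurement) pairs.
for _ in range(500):
    x = random_phantoms(32)
    y = x @ A.T + 0.01 * torch.randn(32, n_meas)
    loss = ((net(y) - x) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: unsupervised fine-tuning on "real" measurements without ground truth,
# here via a measurement-consistency loss (an assumption made for this sketch).
real_y = torch.randn(256, n_meas)                 # placeholder for real data
opt_ft = torch.optim.Adam(net.parameters(), lr=1e-4)
for _ in range(200):
    y = real_y[torch.randint(0, real_y.shape[0], (32,))]
    loss = ((net(y) @ A.T - y) ** 2).mean()
    opt_ft.zero_grad(); loss.backward(); opt_ft.step()
```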
New-architecture GPUs such as the A100 are equipped with multi-instance GPU (MIG) technology, which allows the GPU to be partitioned into multiple small, isolated instances. This technology provides more flexibility for users to support both deep learning training and inference workloads, but utilizing it efficiently can still be challenging. The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG in order to eliminate the need for tedious manual benchmarking and tuning efforts. To achieve this vision, the paper presents MIGPerf, an open-source tool that streamlines the benchmark study for MIG. Using MIGPerf, the authors conduct a series of experiments, including deep learning training and inference characterization on MIG, GPU sharing characterization, and framework compatibility with MIG. The results of these experiments provide new insights and guidance for users to effectively employ MIG, and lay the foundation for further research on the orchestration of hybrid training and inference workloads on MIGs. The code and results are released at https://github.com/MLSysOps/MIGProfiler. This work is still in progress, and more results will be published soon.
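For orientation, a hypothetical micro-benchmark of the kind MIGPerf automates: measuring inference throughput on a single MIG instance, selected by exporting its UUID through CUDA_VISIBLE_DEVICES (the model, batch size, and timing loop are illustrative; see the linked repository for the actual tool).

```python
# Run with the target MIG instance selected, e.g.:
#   CUDA_VISIBLE_DEVICES=MIG-<uuid> python bench.py
# (MIG device UUIDs can be listed with `nvidia-smi -L`.)
import time
import torch
import torchvision

model = torchvision.models.resnet50().eval().cuda()
batch = torch.randn(32, 3, 224, 224, device="cuda")

with torch.no_grad():
    for _ in range(10):                      # warm-up iterations
        model(batch)
    torch.cuda.synchronize()
    start = time.perf_counter()
    n_iters = 100
    for _ in range(n_iters):
        model(batch)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"throughput: {n_iters * batch.shape[0] / elapsed:.1f} images/s")
```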
Tobacco origin identification is of great importance in the tobacco industry. Modeling and analysis of near-infrared spectroscopy sensor data have become a popular approach for rapid detection of internal features. However, when such sensor data are analyzed with traditional artificial neural networks or deep network models, the training process is extremely time-consuming. In this paper, a novel broad learning system with a Takagi-Sugeno (TS) fuzzy subsystem is proposed for rapid identification of tobacco origin. The proposed method employs incremental learning, which obtains the weight matrix of the network with a very small amount of computation, greatly shortening the training time; the extra incremental training step takes only about 3 seconds. The experimental results show that the TS fuzzy subsystem can extract features from the near-infrared data and effectively improve the recognition performance. The proposed method achieves the highest prediction accuracy (95.59%) compared with traditional classification algorithms, an artificial neural network, and a deep convolutional neural network, and has a clear advantage in total training time, at only about 128 seconds.
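A minimal sketch of the closed-form output-weight computation that makes broad-learning-style training fast (this follows the generic broad learning system recipe with random feature and enhancement nodes, not the paper's TS-fuzzy variant):

```python
import numpy as np

def fit_broad_learning(X, Y, n_feature=100, n_enhance=200, lam=1e-3, seed=0):
    """Generic broad learning system: random feature nodes plus nonlinear
    enhancement nodes, with output weights solved by ridge regression.
    (The paper replaces the feature nodes with a Takagi-Sugeno fuzzy subsystem.)"""
    rng = np.random.default_rng(seed)
    Wf = rng.standard_normal((X.shape[1], n_feature))
    Z = X @ Wf                                   # feature nodes (linear maps here)
    We = rng.standard_normal((n_feature, n_enhance))
    H = np.tanh(Z @ We)                          # enhancement nodes
    A = np.hstack([Z, H])                        # broad expansion
    # Closed-form output weights: the "very small amount of computation".
    W_out = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W_out

def predict(X, Wf, We, W_out):
    Z = X @ Wf
    A = np.hstack([Z, np.tanh(Z @ We)])
    return A @ W_out

# Usage with spectra X (n_samples, n_wavelengths) and one-hot origin labels Y:
# Wf, We, W_out = fit_broad_learning(X_train, Y_train)
# origin = predict(X_test, Wf, We, W_out).argmax(axis=1)
```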
Accurate modeling of ship performance is crucial for the shipping industry to optimize fuel consumption and subsequently reduce emissions. However, predicting the speed-power relation in real-world conditions remains a challenge. In this study, we used in-service monitoring data from multiple vessels with different hull shapes to compare the accuracy of data-driven machine learning (ML) algorithms to traditional methods for assessing ship performance. Our analysis consists of two main parts: (1) a comparison of sea trial curves with calm-water curves fitted on operational data, and (2) a benchmark of multiple added wave resistance theories with an ML-based approach. Our results showed that a simple neural network outperformed established semi-empirical formulas based on first principles. The neural network only required operational data as input, while the traditional methods required extensive ship particulars that are often unavailable. These findings suggest that data-driven algorithms may be more effective for predicting ship performance in practical applications.
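As an illustration of the data-driven baseline described above (the feature set, synthetic data, and model size are assumptions, not the study's exact setup), a small MLP can be fitted to operational monitoring data to predict shaft power:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical operational features: speed over ground [kn], mean draft [m],
# trim [m], true wind speed [m/s], significant wave height [m].
rng = np.random.default_rng(0)
X = rng.uniform([8, 7, -1, 0, 0], [18, 13, 1, 20, 5], size=(2000, 5))
# Placeholder target: cubic speed-power trend plus weather terms and noise.
power = 0.9 * X[:, 0] ** 3 + 50 * X[:, 3] + 200 * X[:, 4] + rng.normal(0, 100, 2000)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
model.fit(X, power)
print(model.predict(X[:3]))   # predicted power [kW] for the first samples
```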
As a common appearance defect of concrete bridges, cracks are important indices for bridge structure health assessment. Although there has been much research on crack identification, research on the evolution mechanism of bridge cracks is still far from practical applications. In this paper, the state-of-the-art theories and methodologies for intelligent feature extraction, data fusion, and crack detection based on data-driven approaches are comprehensively reviewed. The research is discussed from three aspects: the feature extraction level for the multimodal parameters of bridge cracks, and the description and diagnosis levels for bridge crack damage states. We focus on previous research concerning the quantitative characterization of the multimodal parameters of bridge cracks and their use in crack identification, while highlighting some of their major drawbacks. In addition, the current challenges and potential future research directions are discussed.
Two approaches to AI, neural networks and symbolic systems, have proven very successful for an array of AI problems. However, neither has been able to achieve the general reasoning ability required for human-like intelligence. It has been argued that this is due to inherent weaknesses in each approach. Luckily, these weaknesses appear to be complementary, with symbolic systems being adept at the kinds of things neural networks have trouble with and vice versa. The field of neural-symbolic AI attempts to exploit this asymmetry by combining neural networks and symbolic AI into integrated systems. Often this has been done by encoding symbolic knowledge into neural networks. Unfortunately, although many different methods for this have been proposed, there is no common definition of an encoding with which to compare them. We seek to rectify this problem by introducing a semantic framework for neural-symbolic AI, which is then shown to be general enough to account for a large family of neural-symbolic systems. We provide a number of examples and proofs of the application of the framework to the neural encoding of various forms of knowledge representation and neural networks. These approaches, disparate at first sight, are all shown to fall within the framework's formal definition of what we call semantic encoding for neural-symbolic AI.